204 research outputs found

    Feasibility of Consumer Grade GNSS Receivers for the Integration in Multi-Sensor-Systems

    Various GNSS applications require low-cost, small-scale, lightweight and power-saving GNSS devices that nevertheless deliver high precision, i.e. low noise in the carrier-phase and code observations. Applications range from navigation to positioning in geo-monitoring units and integration in multi-sensor systems. For the highest precision, only GNSS receivers that provide access to raw data such as carrier phases, code ranges, Doppler and signal strength are suitable. System integration is only possible if the overall noise level is known and quantified at the level of the original observations. A benchmark analysis based on a zero baseline is proposed to quantify the stochastic properties. The performance of a consumer-grade GNSS receiver is determined and evaluated against geodetic GNSS receivers to better understand the utility of consumer-grade receivers. Results indicate high similarity to the geodetic receiver, even though technical limitations are present. Various stochastic techniques report normally distributed carrier-phase noise of 2 mm and code-range noise of 0.5–0.8 m. This is confirmed by studying the modified Allan standard deviation and code-minus-carrier combinations. The derived parameters serve as important indicators for the integration of GNSS receivers into multi-sensor systems.
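
    A minimal sketch of the code-minus-carrier (CMC) combination named above may help picture the noise quantification. This is not the paper's implementation: the GPS L1 wavelength, the quadratic detrending, and the synthetic noise levels (chosen to mirror the reported 2 mm / 0.6 m figures) are assumptions. Differencing the code ranges against the far less noisy carrier phase cancels the satellite-receiver geometry, so the detrended residual scatter is dominated by code noise and multipath.

```python
import numpy as np

def code_minus_carrier_noise(code_m, phase_cycles, wavelength_m=0.19029367):
    """Estimate code-range noise from a code-minus-carrier (CMC) series.

    code_m       -- pseudorange observations in meters
    phase_cycles -- carrier-phase observations in cycles
    The constant carrier ambiguity and slow systematic drifts (e.g.
    ionosphere) are removed with a quadratic trend; the remaining
    scatter is dominated by code noise and multipath.
    """
    cmc = np.asarray(code_m) - wavelength_m * np.asarray(phase_cycles)
    t = np.arange(cmc.size)
    residual = cmc - np.polyval(np.polyfit(t, cmc, 2), t)
    return residual.std(ddof=1)  # approximate code-range noise [m]

# Synthetic demo: 0.6 m code noise, 2 mm carrier-phase noise at 1 Hz
rng = np.random.default_rng(0)
n = 3600
true_range = 2.2e7 + 100.0 * np.sin(np.linspace(0.0, 1.0, n))  # meters
code = true_range + rng.normal(0.0, 0.6, n)
phase = (true_range + rng.normal(0.0, 0.002, n)) / 0.19029367 + 12345.0
print(f"estimated code noise: {code_minus_carrier_noise(code, phase):.2f} m")
```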

    Towards Stereo Vision- and Laser Scanner-based UAS Pose Estimation

    A central issue for the autonomous navigation of mobile robots is to map unknown environments while simultaneously estimating their position within this map. This chicken-and-egg problem is known as simultaneous localization and mapping (SLAM). AscTec's quadrotor Pelican is a powerful and flexible research UAS (unmanned aircraft system) which enables the development of new real-time on-board algorithms for SLAM as well as for autonomous navigation. The relative UAS pose estimation for SLAM, usually based on low-cost sensors like inertial measurement units (IMUs) and barometers, is known to be affected by high drift rates. In order to significantly reduce these effects, we incorporate additional independent pose estimation techniques using exteroceptive sensors. In this article we present first pose estimation results using a stereo camera setup and a laser range finder, each applied individually. Even though these methods fail in a few specific configurations, we demonstrate their effectiveness and value for reducing IMU drift rates and give an outlook on further work towards SLAM.
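
    The drift-reduction idea (dead reckoning from the IMU, periodically corrected by an absolute pose from the stereo camera or laser scanner) can be illustrated in one dimension with a generic complementary filter. This is a sketch of the general principle, not the authors' method; the rates, noise levels, velocity bias and correction gain are all assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 5000                              # 100 Hz IMU, 50 s flight
true_pos = np.cumsum(np.full(n, 0.5) * dt)      # constant 0.5 m/s motion

# Dead reckoning: velocity with a small bias makes the position drift
imu_vel = 0.5 + 0.02 + rng.normal(0.0, 0.05, n)   # 2 cm/s bias + noise
est, fused = 0.0, np.empty(n)
for k in range(n):
    est += imu_vel[k] * dt                      # IMU prediction step
    if k % 100 == 0:                            # 1 Hz exteroceptive pose fix
        meas = true_pos[k] + rng.normal(0.0, 0.05)  # stereo/laser, 5 cm noise
        est += 0.3 * (meas - est)               # complementary correction
    fused[k] = est

print(f"drift without correction: {np.sum(imu_vel) * dt - true_pos[-1]:+.2f} m")
print(f"error with correction:    {fused[-1] - true_pos[-1]:+.2f} m")
```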

    What happens where during disasters? A Workflow for the multifaceted characterization of crisis events based on Twitter data

    Twitter data are a valuable source of information for rescue and relief activities in case of natural disasters and technical accidents. Several methods for disaster- and event-related tweet filtering and classification are available to analyse social media streams. Rather than processing single tweets, taking space and time into account is likely to reveal even more insights regarding local event dynamics and impacts on population and environment. This study focuses on the design and evaluation of a generic workflow for Twitter data analysis that leverages this additional information to characterize crisis events more comprehensively. The workflow covers data acquisition, analysis and visualization, and aims at providing a multifaceted and detailed picture of the events happening in affected areas. This is approached by utilizing agile and flexible analysis methods providing different and complementary views on the data. Utilizing state-of-the-art deep learning and clustering methods, we investigate whether our workflow is suitable to reconstruct and picture the course of events during major natural disasters from Twitter data. Experimental results obtained with a data set acquired during Hurricane Florence in September 2018 demonstrate the effectiveness of the applied methods but also point to further interesting research questions and directions.
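
    As a toy illustration of the space-time view described above, tweets that have already passed an upstream classifier can be aggregated by hour and coarse grid cell to show what happens where. The labels, coordinates and the 0.5 degree cell size are invented for illustration; the actual workflow components (deep learning classification, clustering, visualization) are far more elaborate.

```python
from collections import Counter, defaultdict
from datetime import datetime

# Each record: (timestamp, lat, lon, label from an upstream classifier)
tweets = [
    (datetime(2018, 9, 14, 10, 5), 34.2, -77.9, "flood"),
    (datetime(2018, 9, 14, 10, 40), 34.3, -77.8, "flood"),
    (datetime(2018, 9, 14, 11, 10), 35.2, -80.8, "power_outage"),
]

def cell(lat, lon, size=0.5):
    """Snap a coordinate to a coarse grid cell (cell size in degrees)."""
    return (round(lat / size) * size, round(lon / size) * size)

# Count labels per (hour, grid cell) to expose local event dynamics
counts = defaultdict(Counter)
for ts, lat, lon, label in tweets:
    counts[(ts.replace(minute=0), cell(lat, lon))][label] += 1

for key, label_counts in sorted(counts.items()):
    print(key, dict(label_counts))
```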

    Combining Supervised and Unsupervised Learning to Detect and Semantically Aggregate Crisis-Related Twitter Content

    The Twitter Stream API offers the possibility to develop (near) real-time methods and applications to detect and monitor the impacts of crisis events and their changes over time. As demonstrated by various related research, the content of individual tweets or even entire thematic trends can be utilized to support disaster management, fill information gaps, and augment the results of satellite-based workflows, as well as to extend and improve disaster management databases. Considering the sheer volume of incoming tweets, it is necessary to automatically identify the small number of crisis-relevant tweets and present them in a manageable way.

    Current approaches for identifying crisis-related content focus on supervised models that decide on the relevance of each tweet individually. Although supervised models can efficiently process the high number of incoming tweets, they have to be extensively pre-trained. Furthermore, such models do not capture the history of already processed messages. During a crisis, various unique sub-events can occur that are likely not covered by the respective supervised model and its training data. Unsupervised learning offers both the ability to take tweets from the past into account and a higher adaptive capability, which in turn allows customization to the specific needs of different disasters. From a practical point of view, drawbacks of unsupervised methods are the higher computational costs and the potential need for user interaction to interpret the results.

    In order to enhance the limited generalization capabilities of pre-trained models as well as to speed up and guide unsupervised learning, we propose a combination of both concepts. A successive clustering of incoming tweets allows us to semantically aggregate the stream data, whereas pre-trained models allow us to identify potentially crisis-relevant clusters. Besides the identification of potentially crisis-related content based on semantically aggregated clusters, this approach offers a sound foundation for visualizations and further related tasks, like event detection as well as the extraction of detailed information about the temporal or spatial development of events.

    Our work focuses on analyzing the entire freely available Twitter stream by combining an interval-based semantic clustering with a supervised machine learning model for identifying crisis-related messages. The stream is divided into intervals, e.g. of one hour, and each tweet is projected into a numerical vector using state-of-the-art sentence embeddings. The embeddings are then grouped by a parametric Chinese Restaurant Process clustering. At the end of each interval, a pre-trained feed-forward neural network decides whether a cluster contains crisis-related tweets. With the further developed concepts of cluster chains and central centroids, crisis-related clusters of different intervals can be linked in a topic- and even subtopic-related manner. Initial results show that the hybrid approach can significantly improve the results of pre-trained supervised methods. This is especially true for categories in which the supervised model could not be sufficiently pre-trained due to missing labels. In addition, the semantic clustering of tweets offers a flexible and customizable procedure, resulting in a practical summary of topic-specific stream content.
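
    A simplified sequential clustering in the spirit of the parametric Chinese Restaurant Process mentioned above might look as follows. This is a sketch under assumptions: the similarity kernel, the concentration parameter alpha and the mean-direction centroid update are illustrative choices, not the exact published procedure. At the end of each interval, the pre-trained classifier would then be applied per cluster, e.g. to its centroid.

```python
import numpy as np

def crp_like_clustering(embeddings, alpha=1.0, kappa=5.0, rng=None):
    """Cluster sentence embeddings sequentially, CRP-style (illustrative).

    Each embedding joins an existing cluster with probability proportional
    to (cluster size * exp(kappa * cosine similarity to its centroid)),
    or opens a new cluster with probability proportional to alpha.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    centroids, sizes, assignments = [], [], []
    for x in embeddings:
        x = np.asarray(x, dtype=float)
        x /= np.linalg.norm(x)
        weights = [s * np.exp(kappa * float(c @ x))
                   for c, s in zip(centroids, sizes)]
        weights.append(alpha)                     # option: open a new cluster
        p = np.asarray(weights) / sum(weights)
        k = int(rng.choice(len(p), p=p))
        if k == len(centroids):                   # new cluster
            centroids.append(x.copy())
            sizes.append(1)
        else:                                     # update cluster mean direction
            centroids[k] = centroids[k] * sizes[k] + x
            centroids[k] /= np.linalg.norm(centroids[k])
            sizes[k] += 1
        assignments.append(k)
    return assignments, centroids

# Toy demo: two well-separated blobs of fake 32-d "embeddings"
rng = np.random.default_rng(42)
emb = np.vstack([rng.normal(5.0, 1.0, (10, 32)),
                 rng.normal(-5.0, 1.0, (10, 32))])
labels, _ = crp_like_clustering(emb, rng=rng)
print(labels)   # mostly two clusters, e.g. [0, 0, ..., 1, 1, ...]
```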

    A multi-scale flood monitoring system based on fully automatic MODIS and TerraSAR-X processing chains

    A two-component, fully automated flood monitoring system is described and evaluated. It results from combining two individual flood services that are currently under development at the German Aerospace Center's (DLR) Center for Satellite Based Crisis Information (ZKI) to rapidly support disaster management activities. A first-phase monitoring component of the system systematically detects potential flood events on a continental scale using daily-acquired, medium-spatial-resolution optical data from the Moderate Resolution Imaging Spectroradiometer (MODIS). A set of thresholds controls the activation of the second-phase crisis component of the system, which derives flood information at higher spatial detail using a Synthetic Aperture Radar (SAR) satellite mission (TerraSAR-X). The proposed activation procedure supports the identification of flood situations at different spatial resolutions and the time-critical, on-demand programming of SAR satellite acquisitions at an early stage of an evolving flood situation. The automated processing chains of the MODIS Flood Service (MFS) and the TerraSAR-X Flood Service (TFS) include data pre-processing, the computation and adaptation of global auxiliary data, thematic classification, and the subsequent dissemination of flood maps via an interactive web client. The system is operationally demonstrated and evaluated through the monitoring of two recent flood events, in Russia in 2013 and in Albania/Montenegro in 2013.
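
    The threshold-controlled activation can be pictured with a small sketch: once the MODIS-based monitoring component sees sufficient cloud-free evidence of flooding in a region, a TerraSAR-X acquisition is requested. The rule and its threshold values below are assumptions for illustration, not the thresholds used by the MFS/TFS.

```python
from dataclasses import dataclass

@dataclass
class MonitoringCell:
    region: str
    flooded_fraction: float   # share of pixels flagged as water by MODIS
    valid_fraction: float     # cloud-free share of the daily observation

def should_task_sar(cell, flood_threshold=0.10, valid_threshold=0.50):
    """Illustrative activation rule for the second-phase SAR component."""
    return (cell.valid_fraction >= valid_threshold
            and cell.flooded_fraction >= flood_threshold)

cells = [MonitoringCell("Amur basin", 0.18, 0.70),
         MonitoringCell("Lake Shkodra", 0.04, 0.90)]
for c in cells:
    action = "task TerraSAR-X" if should_task_sar(c) else "keep monitoring"
    print(f"{c.region}: {action}")
```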

    Validation Studies of the ATLAS Pixel Detector Control System

    The ATLAS pixel detector consists of 1744 identical silicon pixel modules arranged in three barrel layers providing coverage for the central region, and three disk layers on either side of the primary interaction point providing coverage of the forward regions. Once deployed in the experiment, the detector will employ optical data transfer, with the requisite powering being provided by a complex system of commercial and custom-made power supplies. During normal performance and production tests in the laboratory, however, only single modules are operated, electrical readout is used, and standard laboratory power supplies are employed. In contrast to these normal tests, the data discussed here were obtained from a multi-module assembly which was powered and read out using production items: the optical data path, the final-design power supply system using close-to-final services, and the Detector Control System (DCS). To demonstrate the functionality of the pixel detector system, a stepwise transition was made from the normal laboratory readout and power supply systems to the ones foreseen for the experiment, with validation of the data obtained at each transition.
    Comment: 8 pages, 8 figures, proceedings for the Pixel2005 workshop.

    How can voting mechanisms improve the robustness and generalizability of toponym disambiguation?

    A vast amount of geographic information exists in natural language texts, such as tweets and news articles. Extracting geographic information from texts is called geoparsing, which comprises two subtasks: toponym recognition and toponym disambiguation, i.e., identifying the geospatial representations of toponyms. This paper focuses on toponym disambiguation, which is usually approached via toponym resolution and entity linking. Recently, many novel approaches have been proposed, especially deep learning-based ones such as CamCoder, GENRE, and BLINK. In this paper, a spatial clustering-based voting approach that combines several individual approaches is proposed to improve state-of-the-art (SOTA) performance in terms of robustness and generalizability. Experiments are conducted to compare a voting ensemble with 20 recent and commonly used approaches on 12 public datasets, including several highly ambiguous and challenging ones (e.g., WikToR and CLDW). The datasets are of six types: tweets, historical documents, news, web pages, scientific articles, and Wikipedia articles, containing in total 98,300 places across the world. The results show that the voting ensemble performs best on all the datasets, achieving an average Accuracy@161km of 0.86, demonstrating the generalizability and robustness of the voting approach. Also, the voting ensemble drastically improves the performance of resolving fine-grained places, i.e., POIs, natural features, and traffic ways.
    Comment: 32 pages, 15 figures.
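
    The spatial clustering-based voting can be sketched compactly: each individual system proposes coordinates for a toponym, and the candidate with the densest spatial support wins, suppressing single-system outliers (e.g. Paris, Texas versus Paris, France). The 50 km support radius and the plain counting rule are illustrative assumptions, not the exact ensemble from the paper.

```python
import math

def haversine_km(p, q):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2.0 * 6371.0 * math.asin(math.sqrt(a))

def vote(candidates, radius_km=50.0):
    """Pick the candidate with the most nearby supporters (illustrative).

    candidates: one (lat, lon) prediction per individual approach.
    Each point supports all candidates within radius_km; the point with
    the largest support wins, which damps single-system outliers.
    """
    support = [sum(haversine_km(p, q) <= radius_km for q in candidates)
               for p in candidates]
    return candidates[support.index(max(support))]

# Three systems resolve Paris, France; one falls for Paris, Texas
guesses = [(48.86, 2.35), (48.85, 2.34), (48.87, 2.36), (33.66, -95.55)]
print(vote(guesses))   # -> one of the Paris, France candidates
```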

    Gaussian Processes for One-class and Binary Classification of Crisis-related Tweets


    Combining Supervised and Unsupervised Learning to Detect and Semantically Aggregate Crisis-Related Twitter Content

    Twitter is an immediate and almost ubiquitous platform and can therefore be a valuable source of information during disasters. Current methods for identifying and classifying crisis-related content are often based on single tweets, i.e., already known information from the past is neglected. In this paper, the combination of tweet-wise pre-trained neural networks and unsupervised semantic clustering is proposed and investigated. The intention is (1) to enhance the generalization capability of pre-trained models, (2) to be able to handle massive amounts of stream data, (3) to reduce information overload by identifying potentially crisis-related content, and (4) to obtain a semantically aggregated data representation that allows for further automated, manual and visual analyses. Latent representations of each tweet based on pre-trained sentence embedding models are used for both clustering and tweet classification. For fast, robust and time-continuous processing, subsequent time periods are clustered individually according to a Chinese restaurant process. Clusters without any tweet classified as crisis-related are pruned. Data aggregation over time is ensured by merging semantically similar clusters. A comparison of our hybrid method to a similar clustering approach, as well as first quantitative and qualitative results from experiments with two different labeled data sets, demonstrate the great potential of the approach for crisis-related Twitter stream analyses.
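
    The cross-interval aggregation step, which merges semantically similar clusters from consecutive time periods, might be sketched as follows. The cosine-similarity threshold is an assumed value, and unit-normalized centroids are presumed, as is common with sentence embeddings; pruning of non-crisis clusters would happen before this linking.

```python
import numpy as np

def link_intervals(prev_centroids, new_centroids, sim_threshold=0.85):
    """Map each new cluster to the previous cluster it continues, if any.

    Centroids are assumed unit-normalized, so the dot product is the
    cosine similarity.  A new cluster continues an existing chain when
    its best match exceeds sim_threshold; otherwise it starts a new
    chain (None).
    """
    chains = {}
    for j, c in enumerate(new_centroids):
        sims = [float(c @ p) for p in prev_centroids]
        best = int(np.argmax(sims)) if sims else -1
        chains[j] = best if sims and sims[best] >= sim_threshold else None
    return chains

prev = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
new = [np.array([0.9, 0.1]) / np.linalg.norm([0.9, 0.1])]
print(link_intervals(prev, new))   # {0: 0}: continues the first chain
```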

    Review article: Detection of actionable tweets in crisis events

    Messages on social media can be an important source of information during crisis situations. They can frequently provide details about developments much faster than traditional sources (e.g., official news) and can offer personal perspectives on events, such as opinions or specific needs. In the future, these messages can also serve to assess disaster risks. One challenge for utilizing social media in crisis situations is the reliable detection of relevant messages in a flood of data. Researchers have started to look into this problem in recent years, beginning with crowdsourced methods. Lately, approaches have shifted towards the automatic analysis of messages. A major stumbling block here is the question of exactly which messages are considered relevant or informative, as this depends on the specific usage scenario and the role of the user in this scenario. In this review article, we present methods for the automatic detection of crisis-related messages (tweets) on Twitter. We start by showing the varying definitions of importance and relevance relating to disasters, leading to the concept of use-case-dependent actionability that has recently become more popular and is the focal point of this review. This is followed by an overview of existing crisis-related social media data sets for evaluation and training purposes. We then compare approaches for solving the detection problem based (1) on filtering by characteristics like keywords and location, (2) on crowdsourcing, and (3) on machine learning techniques, and we analyze the suitability and limitations of these approaches with regard to actionability. We then point out particular challenges, such as the linguistic issues concerning social media data. Finally, we suggest future avenues of research and show connections to related tasks, such as the subsequent semantic classification of tweets.
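
    A baseline of type (1) from the comparison above, filtering by keywords and location, is simple enough to sketch directly; its well-known limitation is that static keyword lists and sparse geotags miss many actionable messages and admit false positives. The keyword list and the bounding box below are illustrative assumptions.

```python
CRISIS_KEYWORDS = {"flood", "earthquake", "evacuation", "damage", "injured"}

def keyword_location_filter(text, lat=None, lon=None,
                            bbox=(33.5, -79.5, 36.5, -75.0)):
    """Keep a tweet if it mentions a crisis keyword or is geotagged
    inside the affected area (bbox: min_lat, min_lon, max_lat, max_lon).
    Both the keyword list and the bounding box are illustrative."""
    has_keyword = any(word in text.lower() for word in CRISIS_KEYWORDS)
    in_bbox = (lat is not None and lon is not None
               and bbox[0] <= lat <= bbox[2]
               and bbox[1] <= lon <= bbox[3])
    return has_keyword or in_bbox

print(keyword_location_filter("Streets flooded near the river"))  # True
print(keyword_location_filter("Nice sunset today", 34.2, -77.9))  # True (in bbox)
```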